Advanced topics

Unified exceptions

DataObjects.Net translates ADO.NET provider-specific exceptions to its own provider-independent exceptions. This dramatically simplifies any code that relies on the class of the underlying exception, such as transactional reprocessing code.

The base class for any storage-level exception is StorageException. It also serves as a base class for exceptions that are not related to SQL execution errors.

This class has many descendants. The most important are OperationTimeoutException, SyntaxErrorException, ReprocessableException and ConstraintViolationException.

Let's look at them. The first is OperationTimeoutException. It is thrown when the current operation times out. This might happen if the server became unavailable or an internal timeout was reached.

SyntaxErrorException is thrown when the server did not accept the generated SQL query. Typically such an error means that either the RDBMS does not support a particular feature or DataObjects.Net generated invalid SQL.

ReprocessableException is a base class for exceptions related to transaction isolation errors. It has two descendants: DeadlockException and TransactionSerializationFailureException. The first is thrown when a deadlock occurred during execution and the current transaction has been chosen as the victim to resolve it. The second is thrown for other transaction isolation-related errors.
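As an illustration, here is a minimal reprocessing sketch: it catches ReprocessableException and re-runs the whole unit of work. The helper itself, its retry limit and the Action<Session> delegate are just assumptions for the example; only the Domain/Session API calls are taken from this manual.

// Runs the given unit of work, retrying it if a deadlock or serialization
// failure rolls the transaction back. The retry limit is an arbitrary example value.
public void ExecuteWithRetry(Domain domain, Action<Session> action, int maxAttempts = 3)
{
  for (var attempt = 1; ; attempt++) {
    try {
      using (var session = domain.OpenSession())
      using (var tx = session.OpenTransaction()) {
        action(session);
        tx.Complete();
      }
      return;
    }
    catch (ReprocessableException) when (attempt < maxAttempts) {
      // The transaction has already been rolled back, so it is safe
      // to execute the whole unit of work again.
    }
  }
}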

Finally, there is ConstraintViolationException. As the name suggests, it wraps errors related to RDBMS constraints on data. It has three descendants, one for each constraint type: CheckConstraintViolationException, ReferentialConstraintViolationException, UniqueConstraintViolationException.

CheckConstraintViolationException is thrown when a CHECK constraint is violated. It is also thrown when a NOT NULL constraint is violated.

ReferentialConstraintViolationException is used for referential constraint errors. In other words, it's thrown when a FOREIGN KEY is not satisfied. This could mean inserting a row that references a non-existent row, or removing a row that is referenced by other rows. DataObjects.Net handles references for you, so generally you should not get this exception.

UniqueConstraintViolationException wraps errors for PRIMARY KEY and UNIQUE constraints. It is also used for duplication errors on unique indexes.
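For example, a duplicate insert can be reported to the user instead of crashing the application. The User entity and its unique Email field below are hypothetical; only the exception type and the Domain/Session calls come from this manual.

// Tries to register a user and reports a duplicate e-mail gracefully.
public void RegisterUser(string email)
{
  try {
    using (var session = Domain.OpenSession())
    using (var tx = session.OpenTransaction()) {
      new User(session) { Email = email }; // hypothetical entity with a unique index on Email
      tx.Complete();
      // The violation surfaces when the changes are persisted,
      // i.e. when the transaction scope is disposed.
    }
  }
  catch (UniqueConstraintViolationException) {
    Console.WriteLine("A user with this e-mail already exists.");
  }
}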

Versions and concurrency control

DataObjects.Net supports both optimistic and pessimistic concurrency control:

  • The Entity.Lock method and the IQueryable<T>.Lock extension method provide support for pessimistic concurrency control;
  • VersionInfo type and a set of supplementary classes (VersionSet, VersionValidator, VersionCapturer) provide support for optimistic concurrency control.

Pessimistic concurrency control: Lock method group

Lock methods put a shared or exclusive lock on the specified rows in the database. The Entity.Lock method puts a lock on the rows that correspond to the Entity instance. Let's take a look at a quick example:

// Increments number of friends for the specified user
public void IncrementFriendsCountWithLock(User user)
{
    // Exclusively locks rows corresponding to the specified `User` entity.
    // If this object is already locked by another transaction, an exception
    // will be thrown.
    user.Lock(LockMode.Exclusive, LockBehavior.ThrowIfLocked);

    // After the lock has been obtained, change the object.
    user.FriendsCount++;
}

The IQueryable<T>.Lock extension method puts a lock on the rows that correspond to the underlying query result. Here is an example of its usage:

// Sets the IsArchived flag on all documents that were created earlier than
// the specified date.
public void ArchiveOldDocuments(DateTime boundary)
{
    // Query the data and put exclusive lock on all obtained rows.
    var documents = session.Query.All<Document>()
        .Where(doc => doc.Date < boundary)
        .Lock(LockMode.Exclusive, LockBehavior.ThrowIfLocked)
        .ToList();

    // Modify the document.
    foreach (var doc in documents)
        doc.IsArchived = true;
}

Lock() methods accept two arguments: LockMode and LockBehavior. These control the type of lock to obtain and the behavior in case the lock cannot be obtained.
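For instance, a read-only consistency check could take a shared lock instead of an exclusive one. This is only a sketch: LockMode.Shared is assumed here based on the description above, and LockBehavior.ThrowIfLocked is reused from the previous examples.

// Reads the number of friends under a shared lock, so concurrent transactions
// can still read the row but cannot modify it until this transaction ends.
public int GetFriendsCountWithSharedLock(User user)
{
  user.Lock(LockMode.Shared, LockBehavior.ThrowIfLocked);
  return user.FriendsCount;
}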

Locking is a rather complex mechanism deeply integrated into almost any RDBMS to provide transaction isolation. If, after reading this part, you've got the impression that it is something simple, most likely you're wrong. Transaction isolation-related concepts are frequently misunderstood. So if you feel you don't fully understand this (e.g. when locks are placed automatically, what types of locks there are, what an index range lock is, what happens when you try to lock a resource that is already locked, what the difference between locking and MVCC is, etc.), we recommend you to read more about this. You can start e.g. from this article, and go further until you're fluent with all the terms (you should already know the keywords).

Logging

Basics

DataObjects.Net’s logging infrastructure consists of 2 main components: loggers and log writers.

Loggers

  • * - special logger, logs everything
  • Xtensive.Orm - logs Session & Transaction-related events and exceptions.
  • Xtensive.Orm.Core - logs misc. internal events. Normally, nobody is interested in them except DataObjects.Net developers.
  • Xtensive.Orm.Building - logs events during the Domain building process.
  • Xtensive.Orm.Sql - logs SQL statements sent to database server.
  • Xtensive.Orm.Upgrade - logs events during database schema upgrade.

Log writers

  • Console - redirects output to application’s console window, if any. Useful for small & test projects.
  • DebugOnlyConsole - the same as Console but appends log data only when a project is run in Debug mode.
  • path_to_file - appends log data to a specified file. If file is absent, it will be created. Useful for development & production environments. path_to_file can be either absolute or relative to the application location.
  • None - writes to /dev/null.

Configuring log output

Built-in logging is configured in the application configuration file (app.config or web.config).

A logger configuration entry takes two parameters: the logger name as source and the log writer name as target.

<Xtensive.Orm>
  <domains>
    <domain name="Default".../>
  </domains>

  <logging>
    <log source="Xtensive.Orm" target="Console"/>
    <log source="Xtensive.Orm.Sql" target="C:\Debug\Sql.log"/>
  </logging>
</Xtensive.Orm>

An example of the log output:

2012-12-31 00:00:02,052 | DEBUG | Xtensive.Orm.Sql | Session 'Default, #9'. Creating connection 'sqlserver://*****'.
2012-12-31 00:00:02,052 | DEBUG | Xtensive.Orm.Sql | Session 'Default, #9'. Opening connection 'sqlserver://*****'.
2012-12-31 00:00:02,052 | DEBUG | Xtensive.Orm.Sql | Session 'Default, #9'. Beginning transaction @ ReadCommitted.
2012-12-31 00:00:02,068 | DEBUG | Xtensive.Orm.Sql | Session 'Default, #9'. SQL batch: SELECT [a].[Id], [a].[TypeId], [a].[Name], [a].[Code], ...
2012-12-31 00:00:02,068 | DEBUG | Xtensive.Orm.Sql | Session 'Default, #9'. Commit transaction.
2012-12-31 00:00:02,068 | DEBUG | Xtensive.Orm.Sql | Session 'Default, #9'. Closing connection 'sqlserver://*****'.

Built-in services

Some very specific tasks can't be easily done with the help of the public API. For those cases DataObjects.Net provides a set of so-called "internal" services that can usually be obtained through the Session.Services endpoint or through static helpers. The services' API surface is not final and may change in future versions.

Note

Use Xtensive.Orm.Services namespace to refer to the services.
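For a quick illustration, services can be resolved from Session.Services (via Get or Demand) or through static helpers such as DirectStateAccessor.Get. This minimal sketch only uses services shown in the subsections below.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  // Resolved from the session's service container.
  var formatter = session.Services.Demand<QueryFormatter>();

  // Resolved through a static helper.
  var stateAccessor = DirectStateAccessor.Get(session);
}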

SessionStateAccessor

Exposes access to the internal state of a Session.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {
    var sessionAccessor = DirectStateAccessor.Get(session);

    // Number of entities in the cache
    int count = sessionAccessor.Count;

    // List all entities in the session cache
    foreach (var entity in sessionAccessor) {
      // apply some action
    }

    // Resolve entity from session cache by key
    var entity = sessionAccessor[myKey];

    // Invalidates state of session cache
    sessionAccessor.Invalidate();

    tx.Complete();
  }
}

PersistentStateAccessor

Exposes methods to access internal state of a Persistent object. PersistentFieldState can be of two kinds: Loaded or Modified.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {

    var animal = session.Query.All<Animal>().First();
    animal.Name = "Tiger";

    var accessor = DirectStateAccessor.Get(animal);

    // Checking the state of field
    var state = accessor.GetFieldState("Name");
    // state is PersistentFieldState.Modified

    tx.Complete();
  }
}

EntitySetStateAccessor

Exposes access to the internal state of an EntitySet.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {

    var person = session.Query.All<Person>().First();

    var accessor = DirectStateAccessor.Get(person.Pets);

    // Number of keys cached
    long count = accessor.Count;

    // Check whether the count is available without a query to the database
    bool isCountAvailable = accessor.IsCountAvailable;

    // Check whether the EntitySet is fully loaded from the database
    bool isFullyLoaded = accessor.IsFullyLoaded;

    // Check whether a key is cached or not
    bool containsKey = accessor.Contains(myKey);

    // Enumerate all cached keys
    foreach (var key in accessor) {
      // apply some action
    }

    tx.Complete();
  }
}

DirectEntityAccessor

Exposes methods for creating instances of Persistent types and accessing their persistent fields.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {

    var accessor = session.Services.Get<DirectEntityAccessor>();

    // Methods for creating entities
    accessor.CreateEntity(typeof(Animal));
    accessor.CreateEntity(typeof(Animal), tuple);
    accessor.CreateEntity(myKey);

    // Methods for creating structures
    accessor.CreateStructure(typeof(Address));
    accessor.CreateStructure(typeof(Address), tuple);

    Animal animal = session.Query.All<Animal>().First();

    // Methods for accessing value of persistent fields
    FieldInfo nameField = Domain.Model.Types[typeof(Animal)].Fields["Name"];
    accessor.GetFieldValue(animal, nameField);
    accessor.SetFieldValue(animal, nameField, "Tiger");

    // Methods for accessing value of reference fields without fetching referenced entity
    FieldInfo ownerField = Domain.Model.Types[typeof(Animal)].Fields["Owner"];
    Key ownerKey = accessor.GetReferenceKey(animal, ownerField);
    accessor.SetReferenceKey(animal, ownerField, newKey);

    tx.Complete();
  }
}

DirectEntitySetAccessor

Exposes methods for manipulating fields of the EntitySet<T> type in a generic way.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {

    var accessor = session.Services.Get<DirectEntitySetAccessor>();

    Person person = session.Query.All<Person>().First();
    FieldInfo petsField = Domain.Model.Types[typeof(Person)].Fields["Pets"];

    // Accessing the EntitySet instance
    var pets = accessor.GetEntitySet(person, petsField);

    // Methods for manipulating content of EntitySet
    accessor.Add(person, petsField, myAnimal);
    accessor.Remove(person, petsField, myAnimal);
    accessor.Clear(person, petsField);

    tx.Complete();
  }
}

DirectSqlAccessor

Exposes access to low-level ADO.NET objects such as the connection, command and transaction.

using Xtensive.Orm.Services;

using (var session = domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {

    var accessor = session.Services.Demand<DirectSqlAccessor>();

    var command = accessor.CreateCommand();
    command.CommandText = "DELETE FROM [dbo].[Animal];";
    command.ExecuteNonQuery();

    // It is a good idea to invalidate session cache after any direct manipulation
    DirectStateAccessor.Get(session).Invalidate();

    tx.Complete();
  }
}

QueryFormatter

Provides methods for formatting LINQ queries.

using Xtensive.Orm.Services;

using (var session = Domain.OpenSession()) {
  using (var tx = session.OpenTransaction()) {

    var query = session.Query.All<FakeClass>().Where(f => f.Id > 0);
    var formatter = session.Services.Get<QueryFormatter>();

    // Query is translated to SQL
    Console.WriteLine(formatter.ToSqlString(query));

    // Query is formatted in C# expression notation
    Console.WriteLine(formatter.ToString(query));

    // Creates DbCommand based on query
    var command = formatter.ToDbCommand(query);

    tx.Complete();
  }
}

Customizing LINQ translation

The LINQ translator in DataObjects.Net handles a wide variety of methods of base class library types, such as String.Substring():

var substrings = session.Query.All<Person>()
  .Select(p => p.Name.Substring(2, 3));

The LINQ translation pipeline is not limited to this set of methods. There are two ways of extending it:

  • Implementing custom SQL compilers. Such compilers define translation of a particular method call, property or field access to SQL DOM expression.
  • Implementing custom LINQ expression rewriters. These allow rewriting expressions that use non-persistent properties, methods and fields to expressions that are known to DataObjects.Net LINQ translator.

First of all, you must choose the members to create custom compilers for and declare a compiler container type. A compiler container is a special static class exposing member compilers as its static methods. It must be marked with the [CompilerContainer] attribute:

[CompilerContainer(typeof (Expression))]
public static class CustomLinqCompilerContainer
{
}

Depending on the required compiler type, you should pass the following argument to the [CompilerContainer] constructor:

  • typeof (Xtensive.Sql.Dml.SqlExpression) to write SQL compiler
  • typeof (System.Linq.Expressions.Expression) for LINQ expression rewriter

Now you can write your own member compilation methods (member compilers) inside it. To define a compilation method (compiler), create a static method and mark it with the [Compiler] attribute. Each member compiler matches a single member of a particular type. Each time this member is encountered in a LINQ expression, the corresponding compiler is invoked to provide the transformation. A member compiler should have a signature based on the following rules:

  • If the target member is a generic method or belongs to a generic type, a MemberInfo parameter should be added. This parameter is used to pass the actual member for which this compiler is invoked. It allows determining the actual generic arguments that were used to construct a particular generic member.
  • If the target member is an instance member, a corresponding parameter should be added. This parameter receives the expression representing the instance on which the member is accessed.
  • If the target member is a method or an indexed property, you should add as many parameters as the target member has.

For example, a compiler for a static three-argument method has the following signature:

[Compiler(typeof(CustomSqlCompilerStringExtensions),
  "BuildAddressString",
  TargetKind.Method | TargetKind.Static)]
public static SqlExpression BuildAddressString(
  SqlExpression countryExpression,
  SqlExpression streetExpression,
  SqlExpression buildingExpression)

[Compiler] attribute parameters:

  • Type where the member is defined.
  • Name of the member.
  • Member kind: static or instance; method, property, field and so on.

You can (but are not required to) apply the [Type] attribute to any argument in a compiler method declaration. When [Type] attributes are provided, the compiler resolver will consider the argument types specified there during the matching process. Such declarations allow creating different compilers for different overloads of the same method. If there are no overloaded methods, or overloading is based solely on the number of parameters, such declarations are not required.
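As a hypothetical sketch of such declarations (the Pad helper overloads and the exact [Type] usage are made up for illustration and should be checked against the API reference), two overloads differing only by an argument type could get separate compilers:

// Hypothetical overloads: Pad(string source, string padding) and Pad(string source, char padding).
[Compiler(typeof (CustomCompilerStringExtensions), "Pad",
  TargetKind.Method | TargetKind.Static)]
public static SqlExpression PadWithString(
  SqlExpression source, [Type(typeof(string))] SqlExpression padding)
{
  return SqlDml.Concat(source, padding);
}

[Compiler(typeof (CustomCompilerStringExtensions), "Pad",
  TargetKind.Method | TargetKind.Static)]
public static SqlExpression PadWithChar(
  SqlExpression source, [Type(typeof(char))] SqlExpression padding)
{
  return SqlDml.Concat(source, padding);
}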

Finally, you must register compiler container in the domain configuration:

var config = new DomainConfiguration("sqlserver://localhost/DO40-Tests");
config.Types.Register(typeof (CustomLinqCompilerContainer));

If all these steps are completed, DataObjects.Net will use your compiler container in the corresponding domain.

Customizing compilation to SQL

The result of compiling any .NET expression to SQL is a SqlExpression object from the Xtensive.Sql.Dml namespace. SQL compilers deal with internal abstractions (Tuple field values), so there are no entities, structures or other high-level objects. The only types you can deal with are primitive types supported by DataObjects.Net, such as String and DateTime.

To illustrate the usage of a custom SQL compiler, let's imagine we want to extend the string type with a GetThirdChar() method returning the third character of the string.

First of all, we need this method itself:

public static class CustomCompilerStringExtensions
{
  public static char GetThirdChar(this string source)
  {
    return source[2];
  }
}

A query using it:

var thirdChars = session.Query.All<Person>().Select(p => p.Name.GetThirdChar());

Let’s write its compiler:

[CompilerContainer(typeof(SqlExpression))]
public static class CustomStringCompilerContainer
{
  [Compiler(typeof (CustomCompilerStringExtensions),
    "GetThirdChar", TargetKind.Method | TargetKind.Static)]
  public static SqlExpression GetThirdChar(SqlExpression _this)
  {
    return SqlDml.Substring(_this, 2, 1);
  }
}

If this compiler is registered in the domain, the SQL DOM translator will convert any GetThirdChar() method call into a SqlDml.Substring(_this, 2, 1) expression.

Let's imagine another case: we want to build an address string from its components:

public static string BuildAddressString(
    string country, string city, string building)
{
  return string.Format("{0}, {1}-{2}", country, city, building);
}

The compiler for this method:

[Compiler(typeof(CustomSqlCompilerStringExtensions),
  "BuildAddressString",
  TargetKind.Method | TargetKind.Static)]
public static SqlExpression BuildAddressString(
  SqlExpression countryExpression,
  SqlExpression streetExpression,
  SqlExpression buildingExpression)
{
  return SqlDml.Concat(
    countryExpression, SqlDml.Literal(", "),
    streetExpression, SqlDml.Literal("-"), buildingExpression);
}

Now this method can be used in LINQ queries:

var addresses = session.Query.All<Person>().Select(p =>
  CustomSqlCompilerStringExtensions.BuildAddressString(
  p.Address.Country, p.Address.City, p.Address.Building));

Finally, let's try to create a compiler for the built-in GetHashCode() method of the string type. Since the method is built-in, only its compiler is needed:

[Compiler(typeof(string), "GetHashCode", TargetKind.Method)]
public static SqlExpression GetHashCode(SqlExpression _this)
{
  // Return string length as its hash code
  return SqlDml.CharLength(_this);
}

Note that the [Compiler] attribute uses typeof(string) as its first parameter, because the GetHashCode() method we're going to compile is overridden in the string type.

This compiler makes it possible to run the following query:

var hashCodes = session.Query.All<Person>()
  .OrderBy(p=>p.Id)
  .Select(p => p.Address.Country.GetHashCode());

Writing custom LINQ expression rewriter

LINQ expression rewriters provide a much more convenient way of extending the LINQ translator:

  • SQL compilation is one of the final steps in the LINQ translation pipeline, so the abstractions you deal with (or return) there are SQL DOM primitives operating on primitive data types.
  • LINQ rewriters, on the other hand, operate at the first stage of translation, so all the expressions from the original LINQ query are available there. Moreover, you can use (e.g. return) any expressions supported by the DataObjects.Net LINQ translator.

LINQ rewriters translate expressions to expressions, so you must apply the [CompilerContainer(typeof(Expression))] attribute to a rewriter type and use Expression as the common argument and result type in its methods.

[CompilerContainer(typeof (Expression))]
public static class CustomLinqCompilerContainer

Let's extend the persistent Person type with a non-persistent FullName property:

public string FullName
{
  get { return string.Format("{0} {1}", FirstName, LastName); }
}

Its LINQ rewriter:

[Compiler(typeof (Person), "FullName", TargetKind.PropertyGet)]
public static Expression FullName(Expression personExpression)
{
  var spaceExpression = Expression.Constant(" ");
  var firstNameExpression = Expression.Property(personExpression, "FirstName");
  var lastNameExpression = Expression.Property(personExpression, "LastName");
  var methodInfo = typeof (string).GetMethod("Concat",
    new[] {typeof (string), typeof (string), typeof (string)});
  var concatExpression = Expression.Call(
    Expression.Constant(null, typeof(string)),
    methodInfo, firstNameExpression, spaceExpression, lastNameExpression);
  return concatExpression;
}

There is a way to reduce the rewriter's complexity: we can create a LambdaExpression and bind its parameters:

[Compiler(typeof (Person), "FullName", TargetKind.PropertyGet)]
public static Expression FullName(Expression personExpression)
{
  // Since "ex" type is specified, C# compiler
  // allows to use Person properties:
  Expression<Func<Person, string>> ex =
    person => person.FirstName + " " + person.LastName;

  // Binding lambda parameters replaces parameter usage in lambda.
  // In this case resulting expression body looks like this:
  // personExpression.FirstName + " " + personExpression.LastName
  return ex.BindParameters(personExpression);
}

As you already know, we must register this rewriter:

var config = new DomainConfiguration("sqlserver://localhost/DO40-Tests");
config.Types.Register(typeof (CustomLinqCompilerContainer));

Now FullName property can be used in LINQ queries:

var fullNames = session.Query.All<Person>()
  .OrderBy(p => p.Id)
  .Select(p => p.FullName);

Now let's imagine we need a method adding a custom prefix to the LastName property:

public string PrefixLastName(string prefix)
{
  return string.Format("{0}{1}", prefix, LastName);
}

Custom compiler for this method:

[Compiler(typeof (Person), "PrefixLastName", TargetKind.Method)]
public static Expression PrefixLastName(
  Expression personExpression, Expression prefixExpression)
{
  Expression<Func<Person, string, string>> ex
    = (person, prefix) => prefix + person.LastName;
  return ex.BindParameters(personExpression, prefixExpression);
}

Now it’s possible to use PrefixLastName method in LINQ queries:

var resultStrings = session.Query.All<Person>()
  .OrderBy(p => p.Id)
  .Select(p => p.PrefixLastName("Mr. "));
var resultStrings2 = session.Query.All<Person>()
  .OrderBy(p => p.Id)
  .Select(p => p.PrefixLastName(p.Id.ToString()));

All custom compilers must be deterministic: they must produce the same resulting expression for the same input. If this condition is violated, a query involving such a compiler will behave improperly when used inside the Session.Query.Execute() method. This does not mean the result of evaluating the returned expression must be deterministic as well. So you can use e.g. DateTime.Now (a non-deterministic function) there, but you must ensure the expression you return is the same for the same input.
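For instance, a rewriter for a hypothetical non-persistent Person.Age property (assuming a persistent BirthDate field) is deterministic in this sense: it always returns the same expression tree, even though evaluating DateTime.Now inside that tree produces different values over time.

[Compiler(typeof (Person), "Age", TargetKind.PropertyGet)]
public static Expression Age(Expression personExpression)
{
  // The same expression is returned for the same input, so the compiler
  // is deterministic; only the result of evaluating it changes over time.
  Expression<Func<Person, int>> ex =
    person => DateTime.Now.Year - person.BirthDate.Year;
  return ex.BindParameters(personExpression);
}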

Multi-schema or multi-database domains

DataObjects.Net allows segmenting the Domain model across several schemas and/or databases for RDBMSs that support multiple schemas/databases. Currently, MS SQL Server fully supports this; PostgreSQL and Oracle support multi-schema domains.

Let's say persistent types are structured into a number of namespaces according to the role they play - Personnel, Sales, Production, Purchasing and so on. In this case it is possible to give each part a dedicated schema.

To do so, the Domain should be configured in a certain way.

Multi-schema domain

First, set the default schema for the domain

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/DO-Tests");
domainConfiguration.DefaultSchema = "dbo";

Then register types from the namespaces

var modelNamespaces = new string[] {
  "Model.Personnel",
  "Model.Sales",
  "Model.Production",
  "Model.Purchasing",
};

foreach(var ns in modelNamespaces) {
  domainConfiguration.Types
    .Register(typeof(Model.Personnel.Employee).Assembly, ns);
}

or in the configuration file

<domain name="DefaultDomain"
  connectionUrl="sqlserver://dotest:dotest@localhost/DO-Tests"
  defaultSchema="dbo">
    <types>
      <add assembly="Model" namespace="Model.Personnel"/>
      <add assembly="Model" namespace="Model.Sales"/>
      <add assembly="Model" namespace="Model.Production"/>
      <add assembly="Model" namespace="Model.Purchasing"/>
    </types>
</domain>

The default schema will contain tables of types from namespaces which are not mapped to a certain schema, as well as system information - key generators (tables or sequences) and Metadata.Xxx tables.

After this, add a mapping rule for each part of the domain model

// namespace-to-schema map
// Model.Personnel -> Personnel
// Model.Sales -> Sales
// Model.Production -> Production
// Model.Purchasing -> Purchasing

foreach(var ns in modelNamespaces) {
  var schemaName = ns.Split('.')[1];
  domainConfiguration.MappingRules
    .Map(ns).ToSchema(schemaName);
}

or in the configuration file

<domain name="DefaultDomain"
  connectionUrl="sqlserver://dotest:dotest@localhost/DO-Tests"
  defaultSchema="dbo">
    <types>
      <add assembly="Model" namespace="Model.Personnel"/>
      <add assembly="Model" namespace="Model.Sales"/>
      <add assembly="Model" namespace="Model.Production"/>
      <add assembly="Model" namespace="Model.Purchasing"/>
    </types>
    <mappingRules>
      <rule namespace="Model.Personnel" schema="Personnel" />
      <rule namespace="Model.Sales" schema="Sales" />
      <rule namespace="Model.Production" schema="Production" />
      <rule namespace="Model.Purchasing" schema="Purchasing" />
    </mappingRules>
</domain>

Notice that no namespace is mapped to the default schema here; this was done only for educational purposes. In real configurations the default schema can be used like any other schema and contain parts of the model.

If parts of the domain model are divided into several assemblies, then these assemblies can be mapped to schemas as well. In this case the mapping rules can be configured like so

var modelAssemblies = new Assembly[] {
  typeof(Model.Personnel.Employee).Assembly,
  typeof(Model.Sales.Customer).Assembly,
  typeof(Model.Production.Product).Assembly,
  typeof(Model.Purchasing.ShipMethod).Assembly,
};

foreach(var assembly in modelAssemblies) {
  domainConfiguration.Types.Register(assembly);
}

// some code

// assembly-to-schema map
// Model.Personnel -> Personnel
// Model.Sales -> Sales
// Model.Production -> Production
// Model.Purchasing -> Purchasing

domainConfiguration.MappingRules
  .Map(typeof(Model.Personnel.Employee).Assembly)
  .ToSchema("Personnel");

domainConfiguration.MappingRules
  .Map(typeof(Model.Sales.Customer).Assembly)
  .ToSchema("Sales");

domainConfiguration.MappingRules
  .Map(typeof(Model.Production.Product).Assembly)
  .ToSchema("Production");

domainConfiguration.MappingRules
  .Map(typeof(Model.Purchasing.ShipMethod).Assembly)
  .ToSchema("Purchasing");

or in the configuration file

<domain name="DefaultDomain"
  connectionUrl="sqlserver://dotest:dotest@localhost/DO-Tests"
  defaultSchema="dbo">
    <types>
      <add assembly="Model.Personnel"/>
      <add assembly="Model.Sales"/>
      <add assembly="Model.Production"/>
      <add assembly="Model.Purchasing"/>
    </types>
    <mappingRules>
      <rule assembly="Model.Personnel" schema="Personnel" />
      <rule assembly="Model.Sales" schema="Sales" />
      <rule assembly="Model.Production" schema="Production" />
      <rule assembly="Model.Purchasing" schema="Purchasing" />
    </mappingRules>
</domain>

Multi-database domains

DomainConfiguration has a Databases collection to declare DatabaseConfiguration instances. Such configurations are required for multi-database domains.

DatabaseConfiguration has the following properties:

  • Name - logical name of the database; it can be either a real name or an alias.
  • RealName - actual name of the database. It needs to be specified if Name is an alias; otherwise it can be omitted and Name will be used as the actual database name.
  • MinTypeId - minimum type identifier value for persistent types in the database (optional).
  • MaxTypeId - maximum type identifier value for persistent types in the database (optional).

MinTypeId and MaxTypeId are optional and might be used to set up specific ranges of type identifiers per-database.
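For example, a hypothetical configuration reserving separate type identifier ranges per database might look like this (the range values are arbitrary):

domainConfiguration.Databases.Add(
  new DatabaseConfiguration("main") {
    RealName = "DO-Tests",
    MinTypeId = 100,   // types mapped to this database get identifiers from this range
    MaxTypeId = 999
  });

domainConfiguration.Databases.Add(
  new DatabaseConfiguration("additional") {
    RealName = "DO-Tests-1",
    MinTypeId = 1000,
    MaxTypeId = 1999
  });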

Thus, multi-database domain configuration may look like so

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/master");
domainConfiguration.DefaultDatabase = "DO-Tests";
domainConfiguration.DefaultSchema = "dbo";

var modelAssemblies = new Assembly[] {
  typeof(Model.Personnel.Employee).Assembly,
  typeof(Model.Sales.Customer).Assembly,
  typeof(Model.Production.Product).Assembly,
  typeof(Model.Purchasing.ShipMethod).Assembly,
};

foreach(var assembly in modelAssemblies) {
  domainConfiguration.Types.Register(assembly);
}

domainConfiguration.Databases
  .Add(new DatabaseConfiguration("main") {
     RealName = "DO-Tests"
  });

domainConfiguration.Databases
  .Add(new DatabaseConfiguration("additional") {
    RealName = "DO-Tests-1"
  });

// assembly-to-schema map
// Model.Personnel -> DO-Tests
// Model.Sales -> DO-Tests
// Model.Production -> DO-Tests-1
// Model.Purchasing -> DO-Tests-1

// real name or alias can be used in rules
domainConfiguration.MappingRules
  .Map(typeof(Model.Personnel.Employee).Assembly)
  .ToDatabase("DO-Tests");

domainConfiguration.MappingRules
  .Map(typeof(Model.Sales.Customer).Assembly)
  .ToDatabase("main");
domainConfiguration.MappingRules
  .Map(typeof(Model.Production.Product).Assembly)
  .ToDatabase("DO-Tests-1");
domainConfiguration.MappingRules
  .Map(typeof(Model.Purchasing.ShipMethod).Assembly)
  .ToDatabase("additional");

or in the configuration file

<domain name="DefaultDomain"
  connectionUrl="sqlserver://dotest:dotest@localhost/master"
  defaultSchema="dbo"
  defaultDatabase="DO-Tests">
    <databases>
      <database name="main" realName="DO-Tests" />
      <database name="additional" realName="DO-Tests-1" />
    </databases>
    <types>
      <add assembly="Model.Personnel"/>
      <add assembly="Model.Sales"/>
      <add assembly="Model.Production"/>
      <add assembly="Model.Purchasing"/>
    </types>
    <mappingRules>
      <rule assembly="Model.Personnel" database="DO-Tests" />
      <rule assembly="Model.Sales" database="main" />
      <rule assembly="Model.Production" database="DO-Tests-1"/>
      <rule assembly="Model.Purchasing" database="additional"/>
    </mappingRules>
</domain>

Such a configuration divides persistent types between the default schema of the 'DO-Tests' database and the default schema of the 'DO-Tests-1' database according to the mapping rules. Each database will also contain the key generators that its model part uses and system tables with information about this part.

The most complicated case is a multi-database Domain which uses several schemas in each database. It's a combination of the previous two cases, so the configuration may look like so

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/master");
domainConfiguration.DefaultDatabase = "DO-Tests";
domainConfiguration.DefaultSchema = "dbo";

domainConfiguration.Databases.Add(
  new DatabaseConfiguration("main") { RealName = "DO-Tests" });
domainConfiguration.Databases.Add(
  new DatabaseConfiguration("additional") { RealName = "DO-Tests-1" }) ;

// assembly-to-schema map
// Model.Personnel -> DO-Tests, dbo
// Model.Sales -> DO-Tests, Sales
// Model.Production -> DO-Tests-1, dbo
// Model.Purchasing -> DO-Tests-1, Purchasing

// real name or alias can be used in rules
domainConfiguration.MappingRules
  .Map(typeof(Model.Personnel.Employee).Assembly, "Model.Personnel")
  .To("DO-Tests", "dbo");
domainConfiguration.MappingRules
  .Map(typeof(Model.Sales.Customer).Assembly, "Model.Sales")
  .To("DO-Tests", "Sales");
domainConfiguration.MappingRules
  .Map(typeof(Model.Production.Product).Assembly, "Model.Production")
  .To("DO-Tests-1", "dbo");
domainConfiguration.MappingRules
  .Map(typeof(Model.Purchasing.ShipMethod).Assembly, "Model.Purchasing")
  .To("DO-Tests-1", "Purchasing");

or in the configuration file

<domain name="DefaultDomain"
  connectionUrl="sqlserver://dotest:dotest@localhost/master"
  defaultSchema="dbo"
  defaultDatabase="DO-Tests">
    <databases>
      <database name="main" realName="DO-Tests" />
      <database name="additional" realName="DO-Tests-1" />
    </databases>
    <types>
      <add assembly="Model.Personnel"/>
      <add assembly="Model.Sales"/>
      <add assembly="Model.Production"/>
      <add assembly="Model.Purchasing"/>
    </types>
    <mappingRules>
      <rule assembly="Model.Personnel" database="DO-Tests" />
      <rule assembly="Model.Sales" database="main" />
      <rule assembly="Model.Production" database="DO-Tests-1"/>
      <rule assembly="Model.Purchasing" database="additional"/>
    </mappingRules>
</domain>

Such Domain configurations are currently supported only by MS SQL Server. These domains also inherit the limitations mentioned earlier.

Limitations

Multi-database domains have some limitations to be aware of.

Firstly, although references between persistent types in different databases are allowed, it is impossible to create an underlying foreign key for such references. DataObjects.Net has its own mechanisms to control referential integrity, but be aware of this.

Secondly, some relations between types may create cyclic database dependencies, which leads to an exception. If such cases are not a mistake but are designed to be so, set DomainConfiguration.AllowCyclicDatabaseDependencies to true.
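For example:

// Allow persistent types located in different databases to reference each other cyclically.
domainConfiguration.AllowCyclicDatabaseDependencies = true;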

Also, entities of the same hierarchy can't be divided between databases.

Another issue that may appear in a multi-database Domain is incompatibility of keys. Generally, key generators carry a marker of the database they belong to. This is done to eliminate misinterpretation of key values: default generators produce numeric values, so generators from different databases may produce the same value, which is unique within a database but not within the Domain. The extra database marker gives extra protection.

This benefit can become a drawback in some model designs, though. For example, there may be a persistent interface whose implementations are mapped to different databases. Such a model in a multi-database Domain will lead to an exception. If the Domain model really needs implementations mapped to different databases, you can force DataObjects.Net to allow such cases by setting DomainConfiguration.MultidatabaseKeys to true. This option allows generators to omit the database marker.

Since DataObjects.Net then no longer checks the uniqueness of generated key values, make sure that generated keys are unique across the Domain. One option is to divide the ranges in which the generators work. For instance

domainConfiguration.KeyGenerators.Add(
  new KeyGeneratorConfiguration("Int64") { Seed = 128, CacheSize = 128, Database = "main" });
domainConfiguration.KeyGenerators.Add(
  new KeyGeneratorConfiguration("Int64") { Seed = 10000000, CacheSize = 128, Database = "additional" });

Actual seed values should be chosen according to the amount of inserts into each database. Be aware that this only reduces the risk of having a non-unique key at the Domain level: there is a chance that at some point the first generator will reach the value 10000000.

Another way to guarantee uniqueness of keys is to use Guids as Key values, but such keys have their own drawbacks: unlike numeric keys they have no natural historical insert order, and certain operators cannot be applied to them.

Ignoring certain items in database

DataObjects.Net allows configuring rules that ignore particular parts of the database - tables, table columns and indexes. The rules hide these parts from the upgrade process, which may be beneficial if, for example, the database/schema is shared between DataObjects.Net and other software that creates its own tables or columns, or in any other case where DataObjects.Net should not "see" some parts of the database.

Every time the database structure is extracted during Domain building, the rules are applied and ignored items are removed from the extracted structure before it is compared with the target structure.

If a table or table columns are ignored, then some other items are ignored too:

  • table indexes. If a table is ignored completely, then all indexes that belong to it will be ignored. In the case of ignored columns, any index that contains an ignored column will be ignored too.
  • foreign keys that reference an ignored table or column.

Rules can be configured in the DomainConfiguration.IgnoreRules collection. The collection exposes IgnoreXxx() methods for each kind of item that can be ignored. Depending on what is ignored, the API allows continuing the rule construction by specifying a schema, a database and, in some cases, a table.

Ignore table

The simplest rule to ignore a table can look like

var configuration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/DO-Tests");
configuration.IgnoreRules.IgnoreTable("SomeSpecificTable");

A group of tables can be ignored by a pattern (prefix or suffix)

domainConfiguration.IgnoreRules.IgnoreTable("SomePrefix*");

In the configuration file these rules will look like

<domain name="IgnoreRuleConfigTest"
  connectionUrl="sqlserver://dotest:dotest@localhost/DO-Tests">
  <ignoreRules>
    <rule table="SomeSpecificTable"/>
    <rule table="SomePrefix*"/>
  </ignoreRules>
</domain>

Such rules are applicable to simple domains that use only one database and one schema within it (if schemas are supported). For domains that contain more than one schema or database, an ignore rule might require declaring a schema and/or a database. The API allows this.

For example, if the Domain has the following configuration

//one database with multiple schemas in it
//either alias name or real name can be used
domainConfiguration.DefaultSchema = "dbo";

// you can read about mapping rules and multi-schema in one of previous topics
domainConfiguration.MappingRules
  .Map("DomainModelAssembly", "DomainModelAssembly.MainModel")
  .ToSchema("dbo");
domainConfiguration.MappingRules
  .Map("DomainModelAssembly", "DomainModelAssembly.ExtensionModel")
  .ToSchema("additional");

the ignore rules may look like

domainConfiguration.IgnoreRules
  .IgnoreTable("HiddenTable").WhenSchema("dbo");

//or just
//domainConfiguration.IgnoreRules.IgnoreTable("HiddenTable");

domainConfiguration.IgnoreRules
  .IgnoreTable("AnotherHiddenTable").WhenSchema("additional");

Configuration file equivalent will look similar to this:

<domain name="IgnoreRuleConfigTest"
  connectionUrl="sqlserver://dotest:dotest@localhost/DO-Tests"
  defaultSchema="dbo">
  <mappingRules>
    <rule assembly="DomainModelAssembly"
      namespace="DomainModelAssembly.MainModel" schema="dbo" />
    <rule assembly="DomainModelAssembly"
      namespace="DomainModelAssembly.ExtensionModel" schema="additional"/>
  </mappingRules>
  <ignoreRules>
    <rule table="HiddenTable" schema="dbo"/>
    <rule table="AdditionalHiddenTable" schema="additional"/>
  </ignoreRules>
</domain>

The same principle works for databases. If the Domain is configured like so

// two databases, each uses only default schema

// declare databases
domainConfiguration.Databases.Add(
  new DatabaseConfiguration("main", "DO-Test-1"));
domainConfiguration.Databases.Add(
  new DatabaseConfiguration("additional", "DO-Tests-2");

// declare which of two databases is default
//either alias name or real name can be used
domainConfiguration.DefaultDatabase = "DO-Test-1";

// schema should be declared because multi-database domains are also multi-schema ones,
// but since only the default schema is used, there is no need to map types to schemas.
domainConfiguration.DefaultSchema = "dbo";

domainConfiguration.MappingRules
  .Map("DomainModelAssembly", "DomainModelAssembly.MainModel")
  .ToDatabase("main");
domainConfiguration.MappingRules
  .Map("DomainModelAssembly", "DomainModelAssembly.ExtensionModel")
  .ToDatabase("additional");

the ignore rules may look like

domainConfiguration.IgnoreRules
  .IgnoreTable("HiddenTable").WhenDatabase("DO-Test-1");

//or just
//domainConfiguration.IgnoreRules.IgnoreTable("HiddenTable");

domainConfiguration.IgnoreRules
  .IgnoreTable("AnotherHiddenTable").WhenDatabase("additional");

In the configuration file it will look like

<domain name="IgnoreRuleConfigTest"
  connectionUrl="sqlserver://dotest:dotest@localhost/master"
  defaultDatabase="DO-Test-1"
  defaultSchema="dbo">
  <databases>
    <database name="main" realName="DO-Tests-1" />
    <database name="additional" realName="DO-Tests-2" />
  </databases>
  <mappingRules>
    <rule assembly="DomainModelAssembly"
      namespace="DomainModelAssembly.MainModel" database="main" />
    <rule assembly="DomainModelAssembly"
      namespace="DomainModelAssembly.ExtensionModel" database="additional"/>
  </mappingRules>
  <ignoreRules>
    <rule table="HiddenTable" database="main"/>
    <rule table="AdditionalHiddenTable" database="additional"/>
  </ignoreRules>
</domain>

If the Domain includes several schemas in multiple databases, apply both WhenXxx() methods with the required parameters.
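For example:

domainConfiguration.IgnoreRules
  .IgnoreTable("HiddenTable")
  .WhenSchema("Sales")
  .WhenDatabase("DO-Tests-1");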

Ignore column or index

One or several columns or indexes within a table (or tables) can be ignored. Exact names or patterns (prefix or suffix) are supported.

Here are some examples:

  • a specific column in a particular table

    domainConfiguration.IgnoreRules.IgnoreColumn("HiddenComment").WhenTable("Customer");

  • a column with a certain name in any table

    domainConfiguration.IgnoreRules.IgnoreColumn("HiddenComment");

  • a group of columns in a specific table

    domainConfiguration.IgnoreRules.IgnoreColumn("Hidden*").WhenTable("Customer");

  • a group of columns in a group of tables

    domainConfiguration.IgnoreRules.IgnoreColumn("Hidden*").WhenTable("Customer*");

  • a specific index in a particular table

    domainConfiguration.IgnoreRules.IgnoreIndex("IX_HiddenIndex").WhenTable("Customer");

  • an index with a certain name in any table

    domainConfiguration.IgnoreRules.IgnoreIndex("IX_HiddenIndex");

  • a group of indexes in a specific table

    domainConfiguration.IgnoreRules.IgnoreIndex("IX_Hidden*").WhenTable("Customer");

  • a group of indexes in a group of tables

    domainConfiguration.IgnoreRules.IgnoreIndex("IX_Hidden*").WhenTable("Customer*");

Notice that when a table is not specified, the rule is applicable to ANY table.

Schema and database settings are available as well.

domainConfiguration.IgnoreRules
  .IgnoreColumn("HiddenComment")
  .WhenTable("Customer")
  .WhenSchema("Sales")
  .WhenDatabase("DO-Test-1");

Storage nodes

A storage node is a concept that provides an opportunity to scale a Domain horizontally - all storage nodes have the same table structure, while the information each node stores is independent.

Each storage node has independent persistent type identifiers, key generators and DataObjects.Net system tables, and the information in them is independent as well.

By default, the Domain that DataObjects.Net builds already has one storage node, so implicit use of this storage node starts as soon as the Domain is built. Additional nodes should be added manually: they should be configured and added to the Domain, after which they become available to work with.

Storage node configuration

NodeConfiguration has the following properties:

  • NodeId - string identifier for the node. This identifier will be used for node selection. The default storage node has the WellKnown.DefaultNodeId identifier, which is string.Empty.
  • UpgradeMode - similarly to DomainConfiguration.UpgradeMode, sets the mode in which the node will be upgraded.
  • ConnectionInfo - allows setting connection information (connection URL, or connection string with provider) different from DomainConfiguration.ConnectionInfo. Some examples are shown below.
  • ConnectionInitializationSql - similar to DomainConfiguration.ConnectionInitializationSql, sets SQL code which will be executed right after a connection to the storage is opened.
  • SchemaMapping - rules that define how schemas of the default node map to this one.
  • DatabaseMapping - rules that define how databases of the default node map to this one.
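As a small sketch combining these properties (the connection URL, the initialization statement and the schema names are placeholders), a node configuration might look like this:

var reportingNodeConfig = new NodeConfiguration("Reporting") {
  UpgradeMode = DomainUpgradeMode.Recreate,
  // a dedicated connection for this node (placeholder URL)
  ConnectionInfo = new ConnectionInfo("sqlserver://dotest:dotest@ReportServer/ReportsDb"),
  // executed right after a connection of this node is opened (placeholder SQL)
  ConnectionInitializationSql = "SET LOCK_TIMEOUT 5000"
};
reportingNodeConfig.SchemaMapping.Add("dbo", "Reporting");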

Let's say there is a database which carries all the storage nodes in use, each in a separate schema. The schemas the nodes will occupy are dbo (the default schema), Alpha, Beta, Gamma and Delta. In this case the storage node configurations may look like:

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/DefaultDb");
domainConfiguration.UpgradeMode = DomainUpgradeMode.Recreate;
domainConfiguration.DefaultSchema = "dbo"; // default node is mapped to dbo schema

var alphaNodeConfig = new NodeConfiguration("Alpha") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
alphaNodeConfig.SchemaMapping.Add("dbo", "Alpha");

var betaNodeConfig = new NodeConfiguration("Beta") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
betaNodeConfig.SchemaMapping.Add("dbo", "Beta");

var gammaNodeConfig = new NodeConfiguration("Gamma") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
gammaNodeConfig.SchemaMapping.Add("dbo", "Gamma");

var deltaNodeConfig = new NodeConfiguration("Delta") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
deltaNodeConfig.SchemaMapping.Add("dbo", "Delta");

Notice how schema mappings are set.

Another possible scenario is when each storage node has a dedicated database within the database server. The related configurations will look like so:

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/master");
domainConfiguration.UpgradeMode = DomainUpgradeMode.Recreate;
domainConfiguration.DefaultDatabase = "DefaultDb"; // default node is mapped to this database
domainConfiguration.DefaultSchema = "dbo"; // and to the dbo schema in that database

var alphaNodeConfig = new NodeConfiguration("Alpha") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
alphaNodeConfig.DatabaseMapping.Add("DefaultDb", "Alpha");

var betaNodeConfig = new NodeConfiguration("Beta") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
betaNodeConfig.DatabaseMapping.Add("DefaultDb", "Beta");

var gammaNodeConfig = new NodeConfiguration("Gamma") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
gammaNodeConfig.DatabaseMapping.Add("DefaultDb", "Gamma");

var deltaNodeConfig = new NodeConfiguration("Delta") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
deltaNodeConfig.DatabaseMapping.Add("DefaultDb", "Delta");

Each node will occupy the default schema of its database - dbo.

Sometimes the Domain model may be mapped to a number of schemas (a multi-schema domain) within a database. It is reasonable to have a dedicated database for each node in this case too. If the databases have the same schema names, then the only mapping that has to be defined is the database mapping; but if schema names differ for some reason, make sure all of them are mapped, as in these configurations:

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/master");
domainConfiguration.UpgradeMode = DomainUpgradeMode.Recreate;
domainConfiguration.DefaultDatabase = "DefaultDb"; // default node is mapped to this database
domainConfiguration.DefaultSchema = "dbo"; // and to the dbo schema in that database

// distributes parts of the domain model across several schemas by namespaces
domainConfiguration.MappingRules
  .Map(typeof(Model.Personnel.Employee).Assembly, "Model.Personnel")
  .ToSchema("dbo");

domainConfiguration.MappingRules
  .Map(typeof(Model.Sales.Customer).Assembly, "Model.Sales")
  .ToSchema("Sales");

domainConfiguration.MappingRules
  .Map(typeof(Model.Production.Product).Assembly, "Model.Production")
  .ToSchema("Production");

domainConfiguration.MappingRules
  .Map(typeof(Model.Purchasing.ShipMethod).Assembly, "Model.Purchasing")
  .ToSchema("Purchasing");

var additionalNode = new NodeConfiguration("Alpha") {
  UpgradeMode = DomainUpgradeMode.Recreate
};

// maps the database of the default node to the additional one
additionalNode.DatabaseMapping.Add("DefaultDb", "AlphaDb");

// and also maps each schema the default node uses to some schema in AlphaDb.
// This can be skipped if AlphaDb has the same schema names.
additionalNode.SchemaMapping.Add("dbo", "dbo");
additionalNode.SchemaMapping.Add("Sales", "n1");
additionalNode.SchemaMapping.Add("Production", "n2");
additionalNode.SchemaMapping.Add("Purchasing", "n3");

Multi-schema nodes can also be placed in one database.

var domainConfiguration = new DomainConfiguration("sqlserver://dotest:dotest@localhost/DefaultDb");
domainConfiguration.UpgradeMode = DomainUpgradeMode.Recreate;
domainConfiguration.DefaultSchema = "dbo"; // and to default node in the default database;

domainConfiguration.MappingRules
  .Map(typeof(Model.Personnel.Employee).Assembly, "Model.Personnel")
  .ToSchema("dbo");

domainConfiguration.MappingRules
  .Map(typeof(Model.Sales.Customer).Assembly, "Model.Sales")
  .ToSchema("Sales");

domainConfiguration.MappingRules
  .Map(typeof(Model.Production.Product).Assembly, "Model.Production")
  .ToSchema("Production");

domainConfiguration.MappingRules
  .Map(typeof(Model.Purchasing.ShipMethod).Assembly, "Model.Purchasing")
  .ToSchema("Purchasing");

var additionalNode = new NodeConfiguration("Node1") {
  UpgradeMode = DomainUpgradeMode.Recreate
};

additionalNode.SchemaMapping.Add("dbo", "dbo1");
additionalNode.SchemaMapping.Add("Sales", "Sales1");
additionalNode.SchemaMapping.Add("Production", "Production1");
additionalNode.SchemaMapping.Add("Purchasing", "Purchasing1");

The ConnectionInfo property can be used when nodes are on a different database server (on the same machine or on a remote one), or in different databases within one RDBMS instance (an alternative to a multi-database domain).

Let's say there are two database servers - one is running on the same machine where the application runs, and another one, called remote, is somewhere within the same network (or somewhere in the cloud, it doesn't matter). The first instance has the schemas dbo and Alpha; the second one has the schemas dbo and Beta. Storage nodes may be configured like so

var local = @"sqlserver://dotest:dotest@localhost/DefaultDb";
var remote = @"sqlserver://dotest:dotest@DB2\SQLInstance/DefaultDb";

var domainConfiguration = new DomainConfiguration(local);
domainConfiguration.UpgradeMode = DomainUpgradeMode.Recreate;
domainConfiguration.DefaultSchema = "dbo";

var node1Config = new NodeConfiguration("Node1") {
  UpgradeMode = DomainUpgradeMode.Recreate
};
node1Config.SchemaMapping.Add("dbo", "Alpha");

// nodes on remote database server
var node2Config = new NodeConfiguration("Node2") {
  UpgradeMode = DomainUpgradeMode.Recreate,
  ConnectionInfo = new ConnectionInfo(remote)
};
// can be skipped
node2Config.SchemaMapping.Add("dbo", "dbo");

var node3Config = new NodeConfiguration("Node3") {
  UpgradeMode = DomainUpgradeMode.Recreate,
  ConnectionInfo = new ConnectionInfo(remote)
};
node3Config.SchemaMapping.Add("dbo", "Beta");

For some providers, such as PostgreSQL and Oracle, the ConnectionInfo property is the only way to have storage nodes in different databases, because these providers have no support for multi-database Domains, although they do support multi-schema Domains.

Adding nodes to a Domain or removing them from it

Since storage nodes attach to a Domain, the Domain should be built first; then you can start adding storage nodes. Once a node is successfully built, it can be selected to work with later on. Let's use the first example of configurations to show how to add storage nodes:

var domain = Domain.Build(domainConfiguration);

var isBuilt = domain.StorageNodeManager.AddNode(alphaNodeConfig);
isBuilt = domain.StorageNodeManager.AddNode(betaNodeConfig);
isBuilt = domain.StorageNodeManager.AddNode(gammaNodeConfig);
isBuilt = domain.StorageNodeManager.AddNode(deltaNodeConfig);

Nodes can also be removed from the Domain, freeing the resources they occupied, if they are no longer in use. The Domain instance does not change and the application continues working.

var isRemoved = domain.StorageNodeManager.RemoveNode("Alpha");
var isRemoved = domain.StorageNodeManager.RemoveNode("Beta");
var isRemoved = domain.StorageNodeManager.RemoveNode("Gamma");
var isRemoved = domain.StorageNodeManager.RemoveNode("Delta");

Accessing data of a storage node

Working with a set of nodes is very similar to working with a Domain. Starting from DataObjects.Net 7.0, storage node usage has changed and become more intuitive. In 6.0 and older versions it was required to open a session and then select the desired storage node for it; now a storage node is able to open sessions just like a Domain. Just call one of the available OpenSession() methods.

Here's an example of a method which updates a certain order in the given storage node.

public void FinishLastOrder(long customerId, string storageNodeId)
{
  var storageNode = domain.StorageNodeManager.GetNode(storageNodeId);

  using (var session = storageNode.OpenSession())
  using (var tx = session.OpenTransaction()) {
    var unfinishedOrder = session.Query.All<Order>()
      .Where(o => o.Customer.Id == customerId && o.State == OrderStates.Unfinished)
      .OrderByDescending(o => o.Date)
      .FirstOrDefault();
    if (unfinishedOrder != null) {
      unfinishedOrder.MarkFinished();
    }

    tx.Complete();
  }
}

Very easy.

With such a design there is no chance of accidentally starting to work with the default node instead of the desired one. The old style of working with nodes is still available in 7.0.x releases, but in the future it may be removed completely.

Upgrade of storage node

Since storage nodes use the same persistent type model, they should be upgraded when the model changes, similarly to upgrading the Domain (which is basically an upgrade of the default storage node). When a storage node is added to a Domain it goes through the same building process as the Domain, with some nuances, so the "Database schema upgrade" part of the manual is applicable to storage nodes too.
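For instance, adding a node with an appropriate upgrade mode (for example PerformSafely, mentioned below) runs the upgrade as part of node building; the node and schema names here are taken from the earlier examples.

var alphaNodeConfig = new NodeConfiguration("Alpha") {
  UpgradeMode = DomainUpgradeMode.PerformSafely
};
alphaNodeConfig.SchemaMapping.Add("dbo", "Alpha");

// Adding the node runs the node building process, including the schema upgrade.
var isBuilt = domain.StorageNodeManager.AddNode(alphaNodeConfig);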

IUpgradeHandler/UpgradeHandler implementations' methods will be called as they would be for the main domain, according to the upgrade mode set in the NodeConfiguration.UpgradeMode property of the node being built. This allows reusing the code that handled the Domain upgrade with minor adjustments or even without them.

IModule and IModule2 implementations' methods, though, will be called differently. For them, only the OnBuilt() method will be called most of the time. Other methods will be called only when the storage node uses the Perform/PerformSafely upgrade mode, and only once per method (as opposed to domain building, when there are two calls for each method), because in these modes there is a temporary domain model which has to be built. Other upgrade modes share the Domain model, so there is no need to build it from scratch.

One more nuance is connected with the UpgradeContext.ExtractedTypeMap and UpgradeContext.FullTypeMap collections. Though storage nodes share the domain model, type identifiers may vary from node to node. In the ideal case, when all storage nodes are upgraded at the same time, type identifiers will probably be the same. In real life there might be scenarios when some nodes are upgraded more frequently than others.

StorageNode structure

A StorageNode instance has only a few properties, but some of them might be very useful depending on the scenario.

  • NodeId - a string property which returns the node identifier.
  • Configuration - returns the NodeConfiguration of this node.
  • TypeIdRegistry - a collection of actual type identifiers for the node. Use this property instead of the TypeInfo.TypeId property to get the real type identifier of a persistent type.
  • Mapping - contains TypeInfo-to-Table and SequenceInfo-to-Sequence (or SequenceInfo-to-Table, depending on the database server) mappings.

The last property is very important if the code uses the SqlDml API, for example:

using (var session = domain.OpenSession()) {
  session.SelectStorageNode("Alpha");

  using (var tx = session.OpenTransaction()) {
    var typeInfo = session.Domain.Model.Types[typeof(Customer)];

    // getting Table for this node
    var customerTable = session.StorageNode.Mapping[typeInfo];
    var tableRef = SqlDml.TableRef(customerTable);

    var select = SqlDml.Select(tableRef);
    select.Columns.Add(tableRef["Id"]);
    select.Columns.Add(tableRef["FirstName"]);
    select.Columns.Add(tableRef["LastName"]);

    var queryBuilder = session.Services
      .Get<Xtensive.Orm.Services.QueryBuilder>();

    var commandText = queryBuilder.CompileQuery(select);
    var request = queryBuilder.CreateRequest(commandText,
      Enumerable.Empty<Xtensive.Orm.Services.QueryParameterBinding>());

    using (var command = queryBuilder.CreateCommand(request))
    using (var reader = command.ExecuteReader()){
      ProcessResults(reader);
    }
  }
}

Limitations

There are some limitations to using storage nodes:

  • The provider should support multiple schemas. This shortens the list of available providers to the MS SQL Server, PostgreSQL and Oracle providers.
  • All nodes should use the same provider so that they have the same feature set. There is a way to circumvent this requirement to a certain extent by using the DomainConfiguration.ForcedServerVersion option (see the "Configuring Domain" part of this manual).